Constraint-based Causal Structure Learning with Consistent Separating Sets

Neural Information Processing Systems

We consider constraint-based methods for causal structure learning, such as the PC algorithm or any PC-derived algorithm whose first step consists in pruning a complete graph to obtain an undirected graph skeleton, which is subsequently oriented. All constraint-based methods perform this first step of removing dispensable edges, iteratively, whenever a separating set and a corresponding conditional independence can be found. Yet, constraint-based methods lack robustness to sampling noise and are prone to uncover spurious conditional independences in finite datasets. In particular, there is no guarantee that the separating sets identified during the iterative pruning step remain consistent with the final graph. In this paper, we propose a simple modification of PC and PC-derived algorithms to ensure that all separating sets identified to remove dispensable edges are consistent with the final graph, thus enhancing the explainability of constraint-based methods. This is achieved by repeating the constraint-based causal structure learning scheme, iteratively, while searching for separating sets that are consistent with the graph obtained at the previous iteration. Ensuring the consistency of separating sets can be done at a limited complexity cost, through a block-cut tree decomposition of graph skeletons, and is found to increase their validity in terms of actual d-separation. It also significantly improves the sensitivity of constraint-based methods while retaining good overall structure learning performance. Last but not least, ensuring separating-set consistency improves the interpretability of constraint-based models for real-life applications.
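As a brief illustration of the pruning step described in the abstract, the following minimal sketch removes dispensable edges from a complete graph and records the separating set used for each removal. It assumes a conditional-independence oracle `ci_independent(x, y, S)`, which in practice would be a statistical test on finite data; all names here are hypothetical, not the authors' implementation.

```python
from itertools import combinations

def pc_skeleton(nodes, ci_independent):
    """Skeleton phase of a PC-style algorithm (simplified sketch).

    `ci_independent(x, y, S)` is an assumed oracle returning True when
    x is conditionally independent of y given the set S.
    Returns the undirected skeleton (dict of adjacency sets) and the
    separating sets found while pruning.
    """
    adj = {v: set(nodes) - {v} for v in nodes}  # start from the complete graph
    sepset = {}
    level = 0  # size of the conditioning sets tried at this pass
    while any(len(adj[x] - {y}) >= level for x in nodes for y in adj[x]):
        for x in list(nodes):
            for y in list(adj[x]):
                # candidate conditioning sets drawn from x's other neighbours
                for S in combinations(adj[x] - {y}, level):
                    if ci_independent(x, y, set(S)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepset[frozenset((x, y))] = set(S)
                        break
        level += 1
    return adj, sepset
```

With a d-separation oracle for the chain a - b - c, the edge a - c is removed with separating set {b}, while the chain edges are kept. As the abstract notes, on finite data nothing in this scheme forces the recorded separating sets to remain consistent with the final skeleton.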


Reviews: Constraint-based Causal Structure Learning with Consistent Separating Sets

Neural Information Processing Systems

Summary ------- The authors address a major drawback of constraint-based causal structure learning algorithms (the PC algorithm and its derivatives), namely that for finite sample sizes the output graphs may be inconsistent with respect to separating sets: the final graph may imply different separating sets than those identified during the run of the algorithm. This implies in particular that output graphs are not guaranteed to belong to their presumed class of graphical models, for example CPDAGs or PAGs. The main reason is that PC-based methods remove too many true links in the skeleton phase. The authors' solution is based on an iterative application of a modified version of PC until the final graph is consistent with the separating sets. They prove that their solution fixes the problem and demonstrate these improvements with numerical experiments.
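The consistency criterion discussed in this review can be stated operationally: every separating set used to remove an edge should consist of vertices lying on paths between the two separated variables in the final graph. Below is a minimal, brute-force sketch of that check (names are hypothetical). The paper achieves this efficiently via a block-cut tree decomposition; here simple paths are enumerated by exhaustive DFS, which is only viable for toy graphs.

```python
def nodes_on_paths(adj, x, y):
    """All vertices lying on at least one simple path between x and y
    in the undirected graph `adj` (dict mapping node -> set of
    neighbours), found by exhaustive DFS."""
    found = set()

    def dfs(v, visited):
        if v == y:
            found.update(visited)
            return
        for w in adj[v]:
            if w not in visited:
                dfs(w, visited | {w})

    dfs(x, {x})
    return found - {x, y}

def sepsets_consistent(adj, sepsets):
    """True when every recorded separating set (keyed by the
    2-element frozenset of separated nodes) is contained in the
    vertices on paths between its pair, i.e. is consistent with the
    final skeleton."""
    return all(S <= nodes_on_paths(adj, *pair)
               for pair, S in sepsets.items())
```

For a star graph with centre m, only m lies on a simple path between two leaves, so a separating set containing a third leaf would be flagged as inconsistent with the final skeleton.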


Reviews: Constraint-based Causal Structure Learning with Consistent Separating Sets

Neural Information Processing Systems

A clever extension of the PC algorithm for causal structure learning, aimed at addressing the inconsistency of results, in terms of separating sets, between the pruning step and the final graph. The new approach is somewhat incremental, but the authors provide some new formal guarantees. The experiments are reasonable, although they could be much better (please see the reviews; this is also acknowledged by the authors, as currently one may wonder about the advantages with respect to PC). I also suggest that the authors motivate the novelty and explain how it can improve, or has improved, results, in particular in view of the higher computational complexity.



Constraint-based Causal Structure Learning with Consistent Separating Sets

Li, Honghao, Cabeli, Vincent, Sella, Nadir, Isambert, Herve

Neural Information Processing Systems
